Mitigating Quantum Gate Errors for Variational Eigensolvers Using Hardware-Inspired Zero-Noise Extrapolation
Variational quantum algorithms have emerged as a cornerstone of contemporary
quantum algorithms research. Practical implementations of these algorithms,
despite offering certain levels of robustness against systematic errors, show a
decline in performance due to the presence of stochastic errors and limited
coherence time. In this work, we develop a recipe for mitigating quantum gate
errors for variational algorithms using zero-noise extrapolation. We introduce
an experimentally amenable method to control error strength in the circuit. We
utilise the fact that gate errors in a physical quantum device are distributed
inhomogeneously over different qubits and pairs thereof. As a result, one can
achieve different circuit error sums based on the manner in which abstract
qubits in the circuit are mapped to a physical device. We find that the
estimated energy in the variational approach is approximately linear with
respect to the circuit error sum (CES). Consequently, a linear fit through the
energy-CES data, when extrapolated to zero CES, can approximate the energy
estimated by a noiseless variational algorithm. We demonstrate this numerically
and further prove that the approximation is exact if the two-qubit gates in the
circuits are arranged in the form of a regular graph.
Comment: 9 pages, 2 figures
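The extrapolation step described in the abstract can be sketched in a few lines. The data below are synthetic and the function name is illustrative, not taken from the paper: energies measured at several circuit error sums (CES) are fit with a line, whose intercept approximates the noiseless energy.

```python
import numpy as np

def extrapolate_to_zero_ces(ces_values, energies):
    """Least-squares linear fit E(CES) = a*CES + b; return b = E(CES=0)."""
    slope, intercept = np.polyfit(ces_values, energies, deg=1)
    return intercept

# Synthetic data: a noiseless energy of -1.85, degraded linearly with CES,
# as would arise from different qubit-to-hardware mappings of one circuit.
ces = np.array([0.02, 0.04, 0.06, 0.08])
energies = -1.85 + 3.0 * ces
print(extrapolate_to_zero_ces(ces, energies))  # recovers approximately -1.85
```

On real hardware the energy-CES relation is only approximately linear, so the fit gives an estimate rather than the exact noiseless value.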
Ion native variational ansatz for quantum approximate optimization
Variational quantum algorithms involve training parameterized quantum
circuits using a classical co-processor. An important variational algorithm,
designed for combinatorial optimization, is the quantum approximate
optimization algorithm. Realization of this algorithm on any modern quantum
processor requires either embedding a problem instance into a Hamiltonian or
emulating the corresponding propagator by a gate sequence. For a vast range of
problem instances this is impossible due to current circuit depth and hardware
limitations. Hence we adapt the variational approach -- using ion native
Hamiltonians -- to create ansatz families that can prepare the ground states
of more general problem Hamiltonians. We analytically determine symmetry
protected classes that make certain problem instances inaccessible unless this
symmetry is broken. We exhaustively search over six qubits and consider up to
twenty circuit layers, demonstrating that symmetry can be broken to solve all
problem instances of the Sherrington-Kirkpatrick Hamiltonian. Going further, we
numerically demonstrate training convergence and level-wise improvement for up
to twenty qubits. Specifically, these findings widen the class of problem
instances that might be solved by ion-based quantum processors. More generally, these results
serve as a test-bed for quantum approximate optimization approaches based on
system-native Hamiltonians and symmetry protection.
Comment: 9 pages; 5 figures; REVTeX
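The variational loop the abstract refers to can be illustrated with a minimal numpy sketch. This is a generic QAOA-style example on a single-edge Ising Hamiltonian, not the paper's ion-native ansatz; the grid search stands in for the classical co-processor.

```python
import numpy as np

ZZ = np.diag([1., -1., -1., 1.])          # problem Hamiltonian Z(x)Z; ground energy -1
X = np.array([[0., 1.], [1., 0.]])

def energy(gamma, beta):
    """One QAOA layer on |++>: problem evolution, then transverse mixer."""
    psi = np.full(4, 0.5, dtype=complex)                  # |++> state
    psi = np.exp(-1j * gamma * np.diag(ZZ)) * psi         # e^{-i gamma ZZ} (diagonal)
    u = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X  # e^{-i beta X}
    psi = np.kron(u, u) @ psi                             # mixer on both qubits
    return (psi.conj() @ (ZZ @ psi)).real

# Classical outer loop: coarse grid search over the two variational angles.
grid = np.linspace(0.0, np.pi, 40, endpoint=False)
best = min(energy(g, b) for g in grid for b in grid)
print(best)   # exact ground energy is -1
```

For this single-edge instance one layer already suffices to reach the ground energy; the paper's point is that for general instances the ansatz family and its symmetries determine which ground states are reachable at all.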
Bell-CHSH non-locality and entanglement from a unified framework
Non-classical probability is a defining feature of quantum mechanics. This paper develops a formalism that exhibits explicitly the manner in which the rules of classical probability break down in the quantum domain. This provides a framework for constructing, in a systematic manner, signatures of the non-classicality of states. Using it, conditions for non-locality and entanglement are shown to emerge from a breakdown of classical probability rules. Bell-CHSH non-locality is derived for arbitrary bipartite systems, while entanglement inequalities are obtained for coupled two-level systems only.
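The Bell-CHSH violation mentioned above can be checked numerically. This is a standard textbook illustration, not the paper's formalism: for the singlet state, the CHSH combination of correlators reaches 2*sqrt(2), exceeding the classical bound of 2.

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def obs(theta):
    """Spin measurement along an axis at angle theta in the X-Z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

singlet = np.array([0., 1., -1., 0.]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

def E(a, b):
    """Correlator <A(a) x B(b)> in the singlet state; equals -cos(a - b)."""
    return (singlet @ np.kron(obs(a), obs(b)) @ singlet).real

# Standard CHSH angle choices.
a0, a1, b0, b1 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))   # 2*sqrt(2) ~ 2.828, above the classical bound of 2
```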
Revisiting integer factorization using closed timelike curves
Closed timelike curves (CTCs) are relativistically valid objects allowing time travel to the past. Treating them as computational objects opens the door to a wide range of results which cannot be achieved using non-relativistic quantum mechanics. Recently, research in classical and quantum computation has focused on effectively harnessing the power of these curves. In particular, Brun (Found Phys Lett 16:245-253, 2003) has shown that CTCs can be utilized to efficiently solve problems like factoring and the quantified satisfiability problem. In this paper, we identify a flaw in Brun's algorithm and propose a modified algorithm that circumvents it.
Tensor networks in machine learning
A tensor network is a type of decomposition used to express and approximate
large arrays of data. A given data-set, quantum state or higher dimensional
multi-linear map is factored and approximated by a composition of smaller
multi-linear maps. This is reminiscent of how a Boolean function might be
decomposed into a gate array: this represents a special case of tensor
decomposition, in which the tensor entries are restricted to 0 and 1 and the
factorisation becomes exact. The collection of associated techniques is
called tensor network methods: the subject developed independently in several
distinct fields of study, which have more recently become interrelated through
the language of tensor networks. The central questions in the field relate
to the expressibility of tensor networks and the reduction of computational
overheads. A merger of tensor networks with machine learning is natural. On the
one hand, machine learning can aid in determining a factorization of a tensor
network approximating a data set. On the other hand, a given tensor network
structure can be viewed as a machine learning model. Herein the tensor network
parameters are adjusted to learn or classify a data-set. In this survey we
review the basics of tensor networks and explain the ongoing effort to develop
the theory of tensor networks in machine learning.
Comment: 7 pages
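The core idea the abstract describes — approximating a large array by a composition of smaller multi-linear maps — can be sketched with the simplest tensor-network decomposition, a truncated SVD of a 2-index tensor. The data here are synthetic, standing in for a data-set with approximate low-rank structure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-5 data plus a little noise, standing in for a large data array.
data = rng.standard_normal((64, 5)) @ rng.standard_normal((5, 64))
data += 0.01 * rng.standard_normal((64, 64))

U, s, Vt = np.linalg.svd(data, full_matrices=False)

def truncate(r):
    """Keep r singular values: data ~ (U_r s_r) @ Vt_r, two smaller maps."""
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

err_2 = np.linalg.norm(data - truncate(2)) / np.linalg.norm(data)
err_5 = np.linalg.norm(data - truncate(5)) / np.linalg.norm(data)
print(err_2, err_5)   # error drops sharply once the kept rank matches the data
```

Higher-order tensor networks (matrix product states, tree tensor networks, etc.) iterate this factor-and-truncate step over many indices; the trade-off between kept rank and approximation error is exactly the expressibility-versus-cost question raised in the survey.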